AAAI.2023 - Constraint Satisfaction and Optimization

Total: 27

#1 Improved Algorithms for Maximum Satisfiability and Its Special Cases [PDF]

Authors: Kirill Brilliantov ; Vasily Alferov ; Ivan Bliznets

The Maximum Satisfiability (MAXSAT) problem is the optimization version of the Satisfiability problem (SAT) in which one is given a CNF formula with n variables and needs to find the maximum number of simultaneously satisfiable clauses. Recent works have achieved significant progress in proving new upper bounds on the worst-case computational complexity of MAXSAT. All of these works reduce general MAXSAT to (n,k)-MAXSAT, the special case of MAXSAT in which each variable appears at most k times in the input formula, so fast algorithms for (n,k)-MAXSAT are key to constructing an efficient exact algorithm for MAXSAT. For the (n,3)-MAXSAT problem, we design an O*(1.1749^n) algorithm, improving on the previous record running time of O*(1.191^n). For the (n,4)-MAXSAT problem, we construct an O*(1.3803^n) algorithm, improving on the previous best running time of O*(1.4254^n). Using these results, we develop an O*(1.0911^L) algorithm for MAXSAT, where L is the length of the input formula, improving on the previous O*(1.0927^L) algorithm.
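
The O*() notation hides polynomial factors, so the following back-of-the-envelope comparison (ours, not the paper's) only gives a rough sense of what shrinking the exponential base from 1.191 to 1.1749 buys for (n,3)-MAXSAT as n grows:

```python
# Rough speedup from lowering the exponential base for (n,3)-MAXSAT.
# O*() suppresses polynomial factors, so this is illustrative only.
old_base, new_base = 1.191, 1.1749
for n in (100, 200, 500):
    print(f"n={n}: speedup factor ~ {(old_base / new_base) ** n:.1e}")
# roughly 3.9x at n=100, 15x at n=200, and 900x at n=500
```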

#2 Lifting (D)QBF Preprocessing and Solving Techniques to (D)SSAT [PDF]

Authors: Che Cheng ; Jie-Hong R. Jiang

Dependency stochastic Boolean satisfiability (DSSAT) generalizes stochastic Boolean satisfiability (SSAT) by Henkinizing the existential variables, allowing their dependencies on randomized variables to be specified explicitly. It allows NEXPTIME problems of reasoning under uncertainty and partial information to be compactly encoded. To date, however, no decision procedure has been implemented for solving DSSAT formulas. This work provides the first such tool by converting DSSAT to SSAT via dependency elimination, similar to converting dependency quantified Boolean formulas (DQBF) to quantified Boolean formulas (QBF). Moreover, we extend (D)QBF preprocessing techniques and implement the first standalone (D)SSAT preprocessor. Experimental results show that solving DSSAT via dependency elimination is highly applicable and that existing SSAT solvers may benefit from preprocessing.

#3 NuWLS: Improving Local Search for (Weighted) Partial MaxSAT by New Weighting Techniques [PDF]

Authors: Yi Chu ; Shaowei Cai ; Chuan Luo

Maximum Satisfiability (MaxSAT) is a prototypical constraint optimization problem, and its generalized version is (Weighted) Partial MaxSAT, denoted (W)PMS, which deals with both hard and soft clauses. Considerable progress has been made on stochastic local search (SLS) algorithms for solving (W)PMS, which mainly focus on clause weighting techniques. In this work, we identify two issues with existing clause weighting techniques for (W)PMS and propose two corresponding ideas. First, we observe that the initial values of soft clause weights have a significant effect on the performance of an SLS solver for (W)PMS, and we propose a weight initialization method. Second, we propose a new clause weighting scheme that, for the first time, employs different conditions for updating hard and soft clause weights. Based on these two ideas, we develop a new SLS solver for (W)PMS named NuWLS. In extensive experiments, NuWLS performs much better than existing SLS solvers on all 6 benchmarks from the incomplete tracks of the MaxSAT Evaluations (MSEs) 2019, 2020, and 2021. In terms of the number of winning instances, NuWLS outperforms state-of-the-art SAT-based incomplete solvers on all 6 benchmarks. More encouragingly, a hybrid solver that combines NuWLS with a SAT-based solver won all four categories in the incomplete track of the MaxSAT Evaluation 2022.

#4 Separate but Equal: Equality in Belief Propagation for Single Cycle Graphs [PDF]

Authors: Erel Cohen ; Omer Lev ; Roie Zivan

Belief propagation is a widely used incomplete optimization algorithm whose main theoretical properties hold only under the assumption that beliefs are not equal. Nevertheless, there is much evidence that equality between beliefs does occur. A method that uses unary function-nodes is commonly assumed to overcome belief equality. We focus on Min-sum, the version of belief propagation for solving constraint optimization problems. We prove that on a single-cycle graph, belief equality can be avoided only when the algorithm converges to the optimal solution. In any other case, the unary function method will not prevent equality, rendering some existing results in need of reassessment. We differentiate between belief equality, which includes equal beliefs within a single message, and assignment equality, which prevents a coherent selection of assignments to variables, and we show necessary and sufficient conditions for both.
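
To make the objects under study concrete, the sketch below runs plain synchronous Min-sum on a toy three-variable binary cycle; the cost tables are invented for illustration. It shows the message and belief computations the paper analyzes, not its equality results or the unary-function remedy:

```python
# Plain synchronous Min-sum on a toy 3-variable binary cycle x0-x1-x2-x0.
# The pairwise cost tables are invented for illustration only.
costs = {
    (0, 1): [[0, 2], [1, 3]],   # costs[(i, j)][vi][vj]
    (1, 2): [[1, 0], [2, 2]],
    (2, 0): [[0, 1], [1, 0]],
}

def cost(i, j, vi, vj):
    return costs[(i, j)][vi][vj] if (i, j) in costs else costs[(j, i)][vj][vi]

# One message per directed edge of the cycle.
messages = {(i, j): [0.0, 0.0]
            for i in range(3) for j in range(3)
            if (i, j) in costs or (j, i) in costs}

for _ in range(20):
    new = {}
    for (i, j) in messages:
        k = 3 - i - j                     # i's other neighbour on the cycle
        vals = [min(cost(i, j, vi, vj) + messages[(k, i)][vi] for vi in (0, 1))
                for vj in (0, 1)]
        shift = min(vals)                 # normalize so messages stay bounded
        new[(i, j)] = [v - shift for v in vals]
    messages = new

for i in range(3):
    belief = [sum(messages[(j, i)][v] for j in range(3) if (j, i) in messages)
              for v in (0, 1)]
    print(f"belief of x{i}: {belief}  (equal entries = belief equality)")
```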

#5 Complexity of Reasoning with Cardinality Minimality Conditions [PDF]

Authors: Nadia Creignou ; Frédéric Olive ; Johannes Schmidt

Many AI-related reasoning problems are based on the problem of satisfiability of propositional formulas with some cardinality-minimality condition. While the complexity of the satisfiability problem (SAT) is well understood when considering systematically all fragments of propositional logic within Schaefer's framework, this is not the case when such a minimality condition is added. We consider the CardMinSat problem, which asks, given a formula φ and an atom x, whether x is true in some cardinality-minimal model of φ. We completely classify the computational complexity of CardMinSat within Schaefer's framework, thus paving the way for a better understanding of the tractability frontier of many AI-related reasoning problems. To this end we use advanced algebraic tools.
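
To pin down the definition, here is a tiny brute-force decision procedure for CardMinSat (our illustrative sketch, exponential in the number of variables; the formula is represented as a Python predicate over assignments):

```python
from itertools import product

def card_min_sat(variables, formula, x):
    """Brute-force CardMinSat: is atom x true in some cardinality-minimal
    model of `formula`? Exponential in |variables|; illustration only."""
    models = []
    for bits in product([False, True], repeat=len(variables)):
        m = dict(zip(variables, bits))
        if formula(m):
            models.append(m)
    if not models:
        return False
    min_weight = min(sum(m.values()) for m in models)
    return any(m[x] for m in models if sum(m.values()) == min_weight)

# phi = (a or b) and (b or c): the unique cardinality-minimal model sets only b.
phi = lambda m: (m["a"] or m["b"]) and (m["b"] or m["c"])
print(card_min_sat(["a", "b", "c"], phi, "b"))  # True
print(card_min_sat(["a", "b", "c"], phi, "a"))  # False
```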

#6 DASH: A Distributed and Parallelizable Algorithm for Size-Constrained Submodular Maximization [PDF]

Authors: Tonmoy Dey ; Yixin Chen ; Alan Kuhnle

MapReduce (MR) algorithms for maximizing monotone submodular functions subject to a cardinality constraint (SMCC) are currently restricted to using the linearly adaptive (non-parallelizable) algorithm GREEDY, since low-adaptive algorithms do not satisfy the requirements of these distributed MR frameworks, thereby limiting their performance. We study the SMCC problem in a distributed setting and propose the first MR algorithms with sublinear adaptive complexity. Our algorithms, R-DASH, T-DASH, and G-DASH, provide (0.316 - ε)-, (3/8 - ε)-, and (1 - 1/e - ε)-approximation ratios, respectively, with nearly optimal adaptive complexity and nearly linear time complexity. Additionally, we provide a framework that, under some mild assumptions, increases the maximum permissible cardinality constraint from the O(n/ℓ^2) of prior MR algorithms to O(n/ℓ), where n is the data size and ℓ is the number of machines; under a stronger condition on the objective function, we increase the maximum constraint value to n. Finally, we provide empirical evidence that our sublinear-adaptive distributed algorithms run orders of magnitude faster than current state-of-the-art distributed algorithms.

#7 SharpSSAT: A Witness-Generating Stochastic Boolean Satisfiability Solver [PDF]

Authors: Yu-Wei Fan ; Jie-Hong R. Jiang

Stochastic Boolean satisfiability (SSAT) is a formalism that allows decision-making for optimization under quantitative constraints. Although SSAT solvers are under active development, existing solvers do not provide Skolem-function witnesses, which are crucial for practical applications. In this work, we develop a new witness-generating SSAT solver, SharpSSAT, which integrates techniques including component caching, clause learning, and pure-literal detection. It can generate a set of Skolem functions witnessing the attained satisfying probability of a given SSAT formula. We also equip the solver ClauSSat with witness-generation capability for comparison. Experimental results show that SharpSSAT outperforms current state-of-the-art solvers and can effectively generate compact Skolem-function witnesses. The new witness-generating solver may broaden the practical applicability of SSAT.
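
To make the abstract's two central notions concrete, the sketch below exhaustively evaluates a toy SSAT formula and records a Skolem-style witness table for the existential variables. The encoding (a prefix list plus a Python predicate for the matrix) is ours for illustration and has nothing to do with SharpSSAT's component-caching machinery:

```python
def ssat_value(prefix, matrix, assignment=None, witness=None):
    """Exhaustively evaluate an SSAT formula. For each existential variable we
    record the maximizing choice under the current prefix assignment -- a
    Skolem-style witness in table form. Exponential; illustration only."""
    assignment = assignment or {}
    witness = witness if witness is not None else {}
    if not prefix:
        return float(matrix(assignment)), witness
    q, var, *rest = prefix[0]
    tail = prefix[1:]
    if q == "E":                      # existential: maximize over both values
        best_p, best_v = -1.0, None
        for v in (False, True):
            p, _ = ssat_value(tail, matrix, {**assignment, var: v}, witness)
            if p > best_p:
                best_p, best_v = p, v
        witness[(var, frozenset(assignment.items()))] = best_v
        return best_p, witness
    p_true = rest[0]                  # randomized: probability-weighted average
    p1, _ = ssat_value(tail, matrix, {**assignment, var: True}, witness)
    p0, _ = ssat_value(tail, matrix, {**assignment, var: False}, witness)
    return p_true * p1 + (1 - p_true) * p0, witness

# R^{0.5} r  E x : (r XOR x) -- x can always be chosen to match, so Pr = 1.0;
# the witness table maps r=True -> x=False and r=False -> x=True.
prob, skolem = ssat_value([("R", "r", 0.5), ("E", "x")], lambda m: m["r"] != m["x"])
print(prob, skolem)
```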

#8 Submodular Maximization under the Intersection of Matroid and Knapsack Constraints [PDF]

Authors: Yu-Ran Gu ; Chao Bian ; Chao Qian

Submodular maximization arises in many applications and has attracted much research attention from areas such as artificial intelligence, finance, and operations research. Previous studies mainly consider only one kind of constraint, while many real-world problems often involve several constraints. In this paper, we consider submodular maximization under the intersection of two commonly used constraints, a k-matroid constraint and an m-knapsack constraint, and propose a new algorithm, SPROUT, that incorporates partial enumeration into the simultaneous greedy framework. We prove that SPROUT achieves a polynomial-time approximation guarantee better than the state-of-the-art algorithms. We then introduce random enumeration and smoothing techniques into SPROUT to improve its efficiency, resulting in the SPROUT++ algorithm, which retains a similar approximation guarantee. Experiments on movie recommendation and weighted max-cut applications demonstrate the superiority of SPROUT++ in practice.

#9 A Framework to Design Approximation Algorithms for Finding Diverse Solutions in Combinatorial Problems [PDF]

Authors: Tesshu Hanaka ; Masashi Kiyomi ; Yasuaki Kobayashi ; Yusuke Kobayashi ; Kazuhiro Kurita ; Yota Otachi

Finding a single best solution is the most common objective in combinatorial optimization problems. However, such a single solution may not be applicable to real-world problems, as objective functions and constraints are only "approximately" formulated for the original real-world problems. To address this issue, finding multiple solutions is a natural direction, and diversity of solutions is an important concept in this context. Unfortunately, finding diverse solutions is much harder than finding a single solution. To cope with this difficulty, we investigate the approximability of finding diverse solutions. As a main result, we propose a framework for designing approximation algorithms for finding diverse solutions, which yields several outcomes, including constant-factor approximation algorithms for finding diverse matchings in graphs and diverse common bases in two matroids, and PTASes for finding diverse minimum cuts and interval schedulings.

#10 An Improved Approximation Algorithm for Wage Determination and Online Task Allocation in Crowd-Sourcing [PDF]

Authors: Yuya Hikima ; Yasunori Akagi ; Hideaki Kim ; Taichi Asami

Crowd-sourcing has attracted much attention due to its growing importance to society, and numerous studies have been conducted on task allocation and wage determination. Recent works have focused on optimizing task allocation and workers' wages simultaneously. However, existing methods do not provide good solutions for real-world crowd-sourcing platforms due to low approximation ratios or myopic problem settings. We tackle an optimization problem for wage determination and online task allocation in crowd-sourcing and propose a fast (1 - 1/(k+3)^(1/2))-approximation algorithm, where k is the minimum of the tasks' budgets (numbers of possible assignments). This approximation ratio is greater than or equal to that of the existing method. The proposed method reduces the problem to a non-convex multi-period continuous optimization problem by approximating the objective function, then transforms the reduced problem into a minimum convex cost flow problem, a well-known combinatorial optimization problem, and solves it with the capacity scaling algorithm. Synthetic experiments and simulations using real crowd-sourcing data show that the proposed method solves the problem faster and outputs higher objective values than existing methods.
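
For a quick sense of the guarantee, the values below are computed directly from the formula in the abstract (not taken from the paper): the ratio improves rapidly as the minimum task budget k grows.

```python
import math

# The guarantee 1 - 1/(k+3)^(1/2) from the abstract, where k is the
# minimum task budget; computed from the formula for illustration.
for k in (1, 6, 22, 97):
    print(f"k={k:3d}: approximation ratio >= {1 - 1 / math.sqrt(k + 3):.3f}")
# k=1: 0.500, k=6: 0.667, k=22: 0.800, k=97: 0.900
```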

#11 Predict+Optimize for Packing and Covering LPs with Unknown Parameters in Constraints [PDF]

Authors: Xinyi Hu ; Jasper C.H. Lee ; Jimmy H.M. Lee

Predict+Optimize is a recently proposed framework which combines machine learning and constrained optimization, tackling optimization problems that contain parameters that are unknown at solving time. The goal is to predict the unknown parameters and use the estimates to solve for an estimated optimal solution to the optimization problem. However, all prior works have focused on the case where unknown parameters appear only in the optimization objective and not the constraints, for the simple reason that if the constraints were not known exactly, the estimated optimal solution might not even be feasible under the true parameters. The contributions of this paper are two-fold. First, we propose a novel and practically relevant framework for the Predict+Optimize setting, but with unknown parameters in both the objective and the constraints. We introduce the notion of a correction function, and an additional penalty term in the loss function, modelling practical scenarios where an estimated optimal solution can be modified into a feasible solution after the true parameters are revealed, but at an additional cost. Second, we propose a corresponding algorithmic approach for our framework, which handles all packing and covering linear programs. Our approach is inspired by the prior work of Mandi and Guns, though with crucial modifications and re-derivations for our very different setting. Experimentation demonstrates the superior empirical performance of our method over classical approaches.

#12 Solving Explainability Queries with Quantification: The Case of Feature Relevancy [PDF]

Authors: Xuanxiang Huang ; Yacine Izza ; Joao Marques-Silva

Trustable explanations of machine learning (ML) models are vital in high-risk uses of artificial intelligence (AI). Apart from the computation of trustable explanations, a number of explainability queries have been identified and studied in recent work. Some of these queries involve solving quantification problems, either in propositional or in more expressive logics. This paper investigates one of these quantification problems, namely the feature relevancy problem (FRP), i.e., deciding whether a (possibly sensitive) feature can occur in some explanation of a prediction. In contrast with earlier work, which studied FRP for specific classifiers, this paper proposes a novel algorithm for the FRP quantification problem that is applicable to any ML classifier meeting minor requirements. Furthermore, the paper shows that the novel algorithm is efficient in practice. The experimental results, obtained using random forests (RFs) induced from well-known publicly available datasets, demonstrate that the proposed solution outperforms existing state-of-the-art solvers for Quantified Boolean Formulas (QBF) by orders of magnitude. Finally, the paper also identifies a novel family of formulas that are challenging for current state-of-the-art QBF solvers.

#13 Second-Order Quantified Boolean Logic [PDF]

Author: Jie-Hong R. Jiang

Second-order quantified Boolean formulas (SOQBFs) generalize quantified Boolean formulas (QBFs) by admitting second-order quantifiers on function variables in addition to first-order quantifiers on atomic variables. Recent endeavors establish that the complexity of SOQBF satisfiability corresponds to the exponential-time hierarchy (EXPH), just as QBF satisfiability corresponds to the polynomial-time hierarchy (PH). This reveals the succinct expressive power of SOQBFs in encoding decision problems that QBFs cannot encode efficiently. In this paper, we investigate second-order quantified Boolean logic with the following main results: First, we present a quantifier-elimination procedure converting SOQBFs to QBFs and a game interpretation of SOQBF semantics. Second, we devise a sound and complete refutation-proof system for SOQBF. Third, we develop an algorithm for countermodel extraction from a refutation proof. Finally, we show potential applications of SOQBFs in system design and multi-agent planning. With these advances, we anticipate the development of practical SOQBF tools.

#14 Learning Markov Random Fields for Combinatorial Structures via Sampling through Lovász Local Lemma [PDF]

Authors: Nan Jiang ; Yi Gu ; Yexiang Xue

Learning to generate complex combinatorial structures satisfying constraints will have transformative impacts in many application domains. However, it is beyond the capabilities of existing approaches due to the highly intractable nature of the embedded probabilistic inference. Prior works spend most of the training time learning to separate valid from invalid structures but do not learn the inductive biases of valid structures. We develop the NEural Lovász Sampler (NELSON), which embeds a sampler based on the Lovász Local Lemma (LLL) as a fully differentiable neural network layer. Our NELSON-CD embeds this sampler into the contrastive divergence learning process of Markov random fields. NELSON allows us to obtain valid samples from the current model distribution, and contrastive divergence is then applied to separate these samples from those in the training set. NELSON is implemented as a fully differentiable neural net, taking advantage of the parallelism of GPUs. Experimental results on several real-world domains reveal that NELSON learns to generate 100% valid structures, while baselines either time out or cannot ensure validity. NELSON also outperforms other approaches in running time, log-likelihood, and MAP scores.

#15 Fast Converging Anytime Model Counting [PDF]

Authors: Yong Lai ; Kuldeep S. Meel ; Roland H.C. Yap

Model counting is a fundamental problem that has been influential in many applications, from artificial intelligence to formal verification. Due to the intrinsic hardness of model counting, approximate techniques have been developed to solve real-world instances. This paper designs a new anytime approach called PartialKC for approximate model counting. The idea is a form of partial knowledge compilation that provides an unbiased estimate of the model count and can converge to the exact count. Our empirical analysis demonstrates that PartialKC achieves significant scalability and accuracy over prior state-of-the-art approximate counters, including satss and STS. Interestingly, the empirical results show that PartialKC reaches convergence for many instances, thereby providing exact model counting performance comparable to state-of-the-art exact counters.
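
To illustrate what an unbiased model-count estimator means, here is the textbook Monte Carlo baseline: 2^n times the hit rate of uniform random assignments. This is emphatically not PartialKC's partial knowledge compilation, which converges to the exact count far more efficiently; it only fixes the notion of unbiasedness.

```python
import random

def naive_unbiased_count(n_vars, clauses, samples=100_000):
    """Naive Monte Carlo estimate of a CNF model count: 2^n times the hit
    rate of uniform random assignments. Unbiased, but NOT the paper's
    PartialKC algorithm -- just a baseline to fix the concept."""
    hits = 0
    for _ in range(samples):
        a = [random.random() < 0.5 for _ in range(n_vars)]
        # literal l > 0 means var l is true; l < 0 means var |l| is false
        if all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            hits += 1
    return (2 ** n_vars) * hits / samples

# (x1 or x2) and (not x1 or x3) has exactly 4 models over 3 variables.
print(naive_unbiased_count(3, [[1, 2], [-1, 3]]))  # ~4.0
```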

#16 Finding Good Partial Assignments during Restart-Based Branch and Bound Search [PDF]

Authors: Hongbo Li ; Jimmy H.M. Lee

Restart-based Branch-and-Bound Search (BBS) is a standard algorithm for solving Constraint Optimization Problems (COPs). In this paper, we propose an approach for general COPs that finds good partial assignments to jumpstart search at each restart, where such assignments are identified by comparing the best solutions found in different restart runs. We use information extracted from historical solutions to evaluate the quality of the partial assignments, so the good partial assignments are dynamically updated as the current best solution evolves. Our approach makes restart-based BBS explore different promising sub-search-spaces to find high-quality solutions. Experiments on the MiniZinc benchmark suite show that our approach brings significant improvements to a black-box COP solver equipped with state-of-the-art search techniques: our method finds better solutions and proves optimality for more instances.

#17 Hybrid Learning with New Value Function for the Maximum Common Induced Subgraph Problem [PDF]

Authors: Yanli Liu ; Jiming Zhao ; Chu-Min Li ; Hua Jiang ; Kun He

Maximum Common Induced Subgraph (MCIS) is an important NP-hard problem with wide real-world applications. An efficient class of MCIS algorithms uses Branch-and-Bound (BnB), successively selecting vertices to match and pruning when it is discovered that no solution better than the best one found so far can exist. The method of selecting the vertices to match is essential for the performance of BnB. In this paper, we propose a new value function and a hybrid selection strategy used in reinforcement learning to define a new vertex selection method, and we propose a new BnB algorithm for MCIS, called McSplitDAL. Extensive experiments show that McSplitDAL significantly improves on the current best BnB algorithms, McSplit+LL and McSplit+RL. An empirical analysis is also performed to illustrate why the new value function and the hybrid selection strategy are effective.

#18 Self-Supervised Primal-Dual Learning for Constrained Optimization [PDF]

Authors: Seonho Park ; Pascal Van Hentenryck

This paper studies how to train machine-learning models that directly approximate the optimal solutions of constrained optimization problems. This is an empirical risk minimization under constraints, which is challenging as training must balance optimality and feasibility conditions. Supervised learning methods often approach this challenge by training the model on a large collection of pre-solved instances. This paper takes a different route and proposes Primal-Dual Learning (PDL), a self-supervised training method that requires neither a set of pre-solved instances nor an optimization solver for training and inference. Instead, PDL mimics the trajectory of an Augmented Lagrangian Method (ALM) and jointly trains primal and dual neural networks. Being a primal-dual method, PDL uses instance-specific penalties on the constraint terms in the loss function used to train the primal network. Experiments show that, on a set of nonlinear optimization benchmarks, PDL typically exhibits negligible constraint violations and minor optimality gaps, and is remarkably close to the ALM optimization. PDL also demonstrates improved or similar performance in terms of optimality gaps, constraint violations, and training times compared to existing approaches.
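
For readers unfamiliar with the method PDL mimics, the sketch below runs the classical ALM loop on a toy equality-constrained problem (our illustration; PDL itself replaces these optimization iterates with trained primal and dual networks):

```python
import numpy as np

# Classical Augmented Lagrangian Method (ALM) on a toy problem:
#   minimize ||x||^2   subject to   x0 + x1 = 1    (optimum: x = [0.5, 0.5])
# PDL trains networks to mimic this kind of trajectory; this is the plain
# optimization loop it imitates, not PDL itself.
h = lambda x: x[0] + x[1] - 1.0           # equality constraint h(x) = 0
h_grad = np.array([1.0, 1.0])

x, lam, rho = np.zeros(2), 0.0, 10.0      # primal point, multiplier, penalty
step = 1.0 / (2.0 + 2.0 * rho)            # safe gradient step for this quadratic
for outer in range(20):
    for _ in range(300):                  # inner: minimize augmented Lagrangian
        grad = 2 * x + (lam + rho * h(x)) * h_grad
        x = x - step * grad
    lam += rho * h(x)                     # outer: dual (multiplier) update
print(x, "violation:", h(x))              # ~[0.5 0.5], violation ~0
```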

#19 Reinforcement Learning for Branch-and-Bound Optimisation Using Retrospective Trajectories [PDF]

Authors: Christopher W. F. Parsonson ; Alexandre Laterre ; Thomas D. Barrett

Combinatorial optimisation problems framed as mixed integer linear programmes (MILPs) are ubiquitous across a range of real-world applications. The canonical branch-and-bound algorithm seeks to exactly solve MILPs by constructing a search tree of increasingly constrained sub-problems. In practice, its solving-time performance depends on heuristics, such as the choice of the next variable to constrain ('branching'). Recently, machine learning (ML) has emerged as a promising paradigm for branching. However, prior works have struggled to apply reinforcement learning (RL), citing sparse rewards, difficult exploration, and partial observability as significant challenges. Instead, leading ML methodologies resort to approximating high-quality handcrafted heuristics with imitation learning (IL), which precludes the discovery of novel policies and requires expensive data labelling. In this work, we propose retro branching, a simple yet effective approach to RL for branching. By retrospectively deconstructing the search tree into multiple paths, each contained within a sub-tree, we enable the agent to learn from shorter trajectories with more predictable next states. In experiments on four combinatorial tasks, our approach enables learning-to-branch without any expert guidance or pre-training. We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables, with ablations verifying that our retrospectively constructed trajectories are essential to achieving these results.

#20 Constraint Optimization over Semirings [PDF]

Authors: A. Pavan ; Kuldeep S. Meel ; N. V. Vinodchandran ; Arnab Bhattacharyya

Interpretations of logical formulas over semirings (other than the Boolean semiring) have applications in various areas of computer science including logic, AI, databases, and security. Such interpretations provide richer information beyond the truth or falsity of a statement. Examples of such semirings include the Viterbi semiring, the min-max or access-control semiring, the tropical semiring, and the fuzzy semiring. The present work investigates the complexity of constraint optimization problems over semirings. The generic optimization problem we study is the following: given a propositional formula φ over n variables and a semiring (K, +, ·, 0, 1), find the maximum value over all possible interpretations of φ over K. This can be seen as a generalization of the well-known satisfiability problem: a propositional formula is satisfiable if and only if the maximum value over all interpretations/assignments over the Boolean semiring is 1. A related problem is to find an interpretation that achieves the maximum value. In this work, we first focus on these optimization problems over the Viterbi semiring, which we call optConfVal and optConf. We show that for general propositional formulas in negation normal form, optConfVal and optConf are in FP^NP. We then investigate optConf when the input formula φ is in conjunctive normal form. For CNF formulas, we first derive an upper bound on the value of optConf as a function of the maximum number of satisfiable clauses: if r is the maximum number of simultaneously satisfiable clauses in a CNF formula with m clauses, then its optConf value is at most 1/4^(m-r). Building on this, we establish that optConf for CNF formulas is hard for the complexity class FP^NP[log]. We also design polynomial-time approximation algorithms and establish an inapproximability result for optConfVal. We establish similar complexity results for these optimization problems over other semirings, including the tropical, fuzzy, and access-control semirings.
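
Under one natural reading of the abstract's semantics (an assumption on our part, not a definition from the paper): an interpretation assigns each variable a value p in [0,1], the negated literal is read as 1 - p, a clause takes the max (semiring +) of its literal values, and a CNF takes the product (semiring ·) of its clause values. A brute-force grid search then illustrates the 1/4^(m-r) bound on the smallest contradictory formula:

```python
import itertools
from math import prod

# Assumed Viterbi-semiring reading: literal x -> p, literal ~x -> 1 - p,
# clause value = max of literals, formula value = product of clauses.
def formula_val(clauses, interp):
    return prod(
        max(interp[abs(l)] if l > 0 else 1 - interp[abs(l)] for l in c)
        for c in clauses
    )

def opt_conf_val(n_vars, clauses, grid=101):
    """Approximate optConfVal by grid search over [0,1]^n; illustration only."""
    pts = [i / (grid - 1) for i in range(grid)]
    return max(
        formula_val(clauses, dict(enumerate(ps, start=1)))
        for ps in itertools.product(pts, repeat=n_vars)
    )

# (x1) and (not x1): m = 2 clauses, at most r = 1 satisfiable at once, so the
# bound gives optConf <= 1/4^(2-1) = 0.25, attained at p = 1/2.
print(opt_conf_val(1, [[1], [-1]]))   # 0.25
```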

#21 Generalized Confidence Constraints [PDF]

Authors: Guillaume Perez ; Steve Malalel ; Gael Glorian ; Victor Jung ; Alexandre Papadopoulos ; Marie Pelleau ; Wijnand Suijlen ; Jean-Charles Régin ; Arnaud Lallouet

In robust optimization, finding a solution that merely respects the constraints is not enough. Usually, the uncertainty and unknown parameters of the model are represented by random variables, and a good solution is one that is robust to the most likely assignments of these random variables. Recently, the Confidence constraint was introduced by Mercier-Aubin et al. to enforce this type of robustness in constraint programming; unfortunately, it is restricted to a conjunction of binary inequalities. In this paper, we generalize the Confidence constraint to any constraint and propose an implementation based on Multi-valued Decision Diagrams (MDDs). The Confidence constraint is defined over a vector of random variables: for a given constraint C and a given threshold, it ensures that the probability that C is satisfied by a sample of the random variables is greater than the threshold. We propose to use MDDs to represent the constraints on the random variables; MDDs are an efficient tool for representing combinatorial constraints thanks to their exponential compression power. Here, both random and decision variables are stored in the MDD, and propagation rules are proposed for removing values of decision variables that cannot lead to robust solutions. Furthermore, for several constraints, we show that decision variables can be omitted from the MDD because lighter filtering algorithms are sufficient, saving an exponential factor in the MDD size. The experimental results obtained on a chemical deliveries problem in factories, where chemical consumption is uncertain, show the efficiency of the proposed approach.
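
The condition the constraint enforces can be made concrete by sampling. The sketch below (invented scenario, loosely echoing the uncertain-consumption setting; the paper enforces this condition exactly via MDD propagation during search, not by sampling) checks whether a candidate decision satisfies a constraint with at least the required confidence:

```python
import random

def confidence_holds(constraint, sample_random_vars, decision, theta, n=100_000):
    """Monte Carlo check of the Confidence constraint's semantics: does
    constraint(decision, random_vars) hold with probability >= theta?
    Illustrates only the condition enforced, not the paper's MDD propagator."""
    hits = sum(constraint(decision, sample_random_vars()) for _ in range(n))
    return hits / n >= theta

# Hypothetical scenario: delivered quantities q must cover uncertain
# consumption w (each w_i uniform on {1,...,6}) with confidence 0.9.
sample = lambda: [random.randint(1, 6) for _ in range(3)]
covers = lambda q, w: all(qi >= wi for qi, wi in zip(q, w))
print(confidence_holds(covers, sample, [6, 6, 6], 0.9))  # True  (prob = 1.0)
print(confidence_holds(covers, sample, [5, 5, 5], 0.9))  # False (prob ~ 0.58)
```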

#22 Circuit Minimization with QBF-Based Exact Synthesis [PDF]

Authors: Franz-Xaver Reichl ; Friedrich Slivovsky ; Stefan Szeider

This paper presents a rewriting method for Boolean circuits that minimizes small subcircuits with exact synthesis. Individual synthesis tasks are encoded as Quantified Boolean Formulas (QBFs) that capture the full flexibility for implementing multi-output subcircuits. This is in contrast to SAT-based resynthesis, where "don't cares" are computed for an individual gate, and replacements are confined to the circuitry used exclusively by that gate. An implementation of our method achieved substantial size reductions compared to state-of-the-art methods across a wide range of benchmark circuits.

#23 Probabilistic Generalization of Backdoor Trees with Application to SAT [PDF]

Authors: Alexander Semenov ; Daniil Chivilikhin ; Stepan Kochemazov ; Ibragim Dzhiblavi

The concept of Strong Backdoor Sets (SBS) for Constraint Satisfaction Problems is well known as one of the attempts to exploit structural peculiarities in hard instances. However, in practice, finding an SBS for a particular instance is often harder than solving the instance itself. Recently, a probabilistic weakened variant of the SBS was introduced: whereas in an SBS all subproblems must be polynomially solvable, in the probabilistic SBS only a large fraction ρ of them must have this property. This new variant of backdoors, called ρ-backdoors, makes it possible to use the Monte Carlo method and metaheuristic optimization to find, relatively quickly, ρ-backdoors with ρ very close to 1. Although a portion of hard subproblems remains in a ρ-backdoor-based decomposition, in practice the narrowing of the search space often allows solving the problem faster with such a backdoor than without it. In this paper, we significantly improve on the concept of ρ-backdoors by extending it to backdoor trees: we introduce ρ-backdoor trees, show the interconnections between SBS, ρ-backdoors, and the corresponding backdoor trees, and establish some new theoretical properties of backdoor trees. In the experimental part of the paper, we show that moving from the metaheuristic search for ρ-backdoors to that for ρ-backdoor trees drastically reduces the time required to construct the required decompositions without compromising their quality.
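
The Monte Carlo estimation of ρ mentioned above can be sketched as follows. We assume unit propagation as the polynomial subsolver (a common choice in this line of work; the concrete encoding and example formula here are ours): sample assignments to the candidate backdoor variables and count how often the residual subproblem is decided outright.

```python
import random

def unit_propagate_decides(clauses, assignment):
    """True iff unit propagation alone decides the CNF under the given
    partial assignment (all clauses satisfied, or a conflict derived)."""
    assign = dict(assignment)
    while True:
        unit, active = None, []
        for c in clauses:
            if any(assign.get(abs(l)) == (l > 0) for l in c):
                continue                       # clause already satisfied
            free = [l for l in c if abs(l) not in assign]
            if not free:
                return True                    # conflict: decided UNSAT
            if len(free) == 1:
                unit = free[0]
            active.append(c)
        if not active:
            return True                        # all satisfied: decided SAT
        if unit is None:
            return False                       # UP fixpoint: subproblem "hard"
        assign[abs(unit)] = unit > 0

def estimate_rho(clauses, backdoor, samples=10_000):
    """Monte Carlo estimate of rho for a candidate backdoor set: the fraction
    of sampled backdoor assignments whose residual subproblem the polynomial
    subsolver (here, unit propagation) decides outright."""
    hits = sum(
        unit_propagate_decides(clauses, {v: random.random() < 0.5 for v in backdoor})
        for _ in range(samples))
    return hits / samples

clauses = [[1, 2, 3], [-1, -2], [-1, 3], [2, -3]]
print(estimate_rho(clauses, backdoor=[1, 2]))  # 1.0: every branch is decided by UP
```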

#24 The Expressive Power of Ad-Hoc Constraints for Modelling CSPs [PDF]

Authors: Ruiwei Wang ; Roland H.C. Yap

Ad-hoc constraints (also called generic constraints) are important for modelling Constraint Satisfaction Problems (CSPs). Many representations have been proposed to define ad-hoc constraints, such as tables, decision diagrams, binary constraint trees, automata, and context-free grammars. However, prior works mainly focus on efficient Generalized Arc Consistency (GAC) propagators for ad-hoc constraints using these representations. In this paper, we ask a more fundamental question bearing on how constraints in a CSP are modelled as ad-hoc constraints: how does the choice of representation and of operations affect tractability? Rather than ad-hoc constraints and their GAC propagators, our focus is their expressive power in terms of succinctness (polysize) and cost of operations/queries (polytime). We use a large set of constraint families to investigate the expressive power of 14 existing ad-hoc constraint representations. We give a complete map of their succinctness and present results on the tractability of applying various operations and queries to them. Finally, we give case studies illustrating how our results can be useful for questions in the modelling of CSPs.

#25 Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus [PDF]

Authors: Yudong Xu ; Elias B. Khalil ; Scott Sanner

The Abstraction and Reasoning Corpus (ARC) aims at benchmarking the performance of general artificial intelligence algorithms. The ARC's focus on broad generalization and few-shot learning has made it difficult to solve using pure machine learning. A more promising approach has been to perform program synthesis within an appropriately designed Domain Specific Language (DSL). However, these too have seen limited success. We propose Abstract Reasoning with Graph Abstractions (ARGA), a new object-centric framework that first represents images using graphs and then performs a search for a correct program in a DSL that is based on the abstracted graph space. The complexity of this combinatorial search is tamed through the use of constraint acquisition, state hashing, and Tabu search. An extensive set of experiments demonstrates the promise of ARGA in tackling some of the complicated object-centric tasks of the ARC rather efficiently, producing programs that are correct and easy to understand.